# High-precision NLU

## Bert Uncased Intent Classification

Apache-2.0 · yeniguno · 1,942 downloads · 1 like · Tags: Text Classification, Transformers, English

A BERT-based model fine-tuned to classify user inputs into 82 intents, suitable for dialogue systems and natural language understanding tasks.

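A minimal usage sketch with the Hugging Face pipeline API. The listing does not give the exact repository ID, so the model name below is an assumption; substitute the actual checkpoint from the author's page.

```python
from transformers import pipeline

# Hypothetical repository ID -- replace with the actual checkpoint
# published by the author (yeniguno).
classifier = pipeline(
    "text-classification",
    model="yeniguno/bert-uncased-intent-classification",  # assumed ID
)

result = classifier("Can you book me a table for two at 7 pm?")
# The label set depends on the checkpoint; output shown is illustrative.
print(result)  # e.g. [{'label': 'book_restaurant', 'score': 0.98}]
```
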
## Deberta Xlarge Mnli

MIT · microsoft · 833.58k downloads · 19 likes · Tags: Large Language Model, Transformers, English

DeBERTa-XLarge-MNLI is an enhanced BERT model based on the disentangled attention mechanism, fine-tuned on the MNLI task with 750M parameters, excelling in natural language understanding tasks.

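MNLI-fine-tuned checkpoints like this one are commonly repurposed for zero-shot classification: each candidate label is turned into an entailment hypothesis and scored against the input. A minimal sketch, assuming the standard microsoft/deberta-xlarge-mnli Hub ID:

```python
from transformers import pipeline

# The zero-shot pipeline wraps an NLI model: for each candidate label it
# builds a hypothesis ("This example is {label}.") and ranks labels by
# entailment probability.
zsc = pipeline("zero-shot-classification", model="microsoft/deberta-xlarge-mnli")

out = zsc(
    "The touchscreen stopped responding after the latest update.",
    candidate_labels=["hardware issue", "software issue", "billing question"],
)
print(out["labels"][0], round(out["scores"][0], 3))
```
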
## Deberta Base

MIT · microsoft · 298.78k downloads · 78 likes · Tags: Large Language Model, English

DeBERTa is an improved BERT model based on the disentangled attention mechanism and enhanced masked decoder, excelling in multiple natural language understanding tasks.

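The base checkpoint ships without a task head, so the usual workflow attaches a randomly initialized classification head and fine-tunes it on task data. A minimal sketch, assuming the microsoft/deberta-base Hub ID:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base")
# num_labels is task-specific (3 here as an example); the new head is
# randomly initialized and meaningless until fine-tuned.
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-base", num_labels=3
)

inputs = tokenizer(
    "DeBERTa improves on BERT with disentangled attention.",
    return_tensors="pt",
)
logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 3])
```
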
## Deberta V2 Xlarge Mnli

MIT · microsoft · 51.59k downloads · 9 likes · Tags: Large Language Model, Transformers, English

DeBERTa V2 XLarge is an enhanced natural language understanding model developed by Microsoft. It improves the BERT architecture through a disentangled attention mechanism and an enhanced mask decoder, outperforming BERT and RoBERTa on multiple NLU tasks; this checkpoint is fine-tuned on MNLI.

## Deberta Xlarge

MIT · microsoft · 312 downloads · 2 likes · Tags: Large Language Model, Transformers, English

DeBERTa improves upon BERT and RoBERTa models with a disentangled attention mechanism and enhanced masked decoder, demonstrating superior performance in most natural language understanding tasks.

## Deberta Base Mnli

MIT · microsoft · 96.92k downloads · 6 likes · Tags: Large Language Model, English

DeBERTa (decoding-enhanced BERT with disentangled attention) fine-tuned on the MNLI task.

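For direct NLI inference, premise and hypothesis are encoded together as one sequence pair; reading the label order from the checkpoint's config avoids hardcoding it. A sketch, assuming the microsoft/deberta-base-mnli Hub ID:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "microsoft/deberta-base-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# NLI input is a (premise, hypothesis) pair encoded as a single sequence.
premise = "A man is playing a guitar on stage."
hypothesis = "A person is performing music."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")

with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

# Label order varies between checkpoints; read it from the config.
for i, p in enumerate(probs):
    print(model.config.id2label[i], round(p.item(), 3))
```
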
## V3large 2epoch

MIT · NDugar · 31 downloads · 0 likes · Tags: Large Language Model, Transformers, English

DeBERTa is an improved BERT model based on the disentangled attention mechanism. With 160GB of training data and 1.5 billion parameters, it surpasses the performance of BERT and RoBERTa on multiple natural language understanding tasks.

## 1epochv3

MIT · NDugar · 28 downloads · 0 likes · Tags: Large Language Model, Transformers, English

DeBERTa is an enhanced BERT model based on the disentangled attention mechanism, surpassing BERT and RoBERTa on multiple natural language understanding tasks.

## ZSD Microsoft V2xxlmnli

MIT · NDugar · 59 downloads · 3 likes · Tags: Large Language Model, Transformers, English

A large-scale DeBERTa model (decoding-enhanced BERT with disentangled attention) fine-tuned on the MNLI task.

## Debertav3 Mnli Snli Anli

NDugar · 26 downloads · 3 likes · Tags: Large Language Model, Transformers, English

DeBERTa is a decoding-enhanced BERT model with disentangled attention that improves upon BERT and RoBERTa and performs better on most natural language understanding tasks; this checkpoint is fine-tuned on MNLI, SNLI, and ANLI.

## V3large 1epoch

MIT · NDugar · 32 downloads · 0 likes · Tags: Large Language Model, Transformers, English

DeBERTa is a decoding-enhanced BERT model based on the disentangled attention mechanism, excelling in natural language understanding tasks.

## V2xl Again Mnli

MIT · NDugar · 30 downloads · 0 likes · Tags: Large Language Model, Transformers, English

DeBERTa is a decoding-enhanced BERT model based on the disentangled attention mechanism. By improving the attention mechanism and the mask decoder, it surpasses BERT and RoBERTa on multiple natural language understanding tasks.
